Observability Score Roll-Up Report
What is an Observability Score Roll-Up Report?
The Observability Score Roll-Up report provides aggregated scores of all adaptive rules (including all data sets and all columns), as well as the average passing and breaking scores for all columns over the last 30 days. The scores are aggregated by check type and dimension.
You can generate an Observability Score Roll-Up report from the Reports page.
Steps
1. Sign in to the Collibra DQ instance and click the Reports icon in the left navigation pane.
   The Reports page opens.
2. On the Reports page, click the Observability Score Roll-Up Report link.
3. To configure the chart type and the data you want to see in the chart, click the ellipsis icon.
4. Click Chart Configuration.
   The Chart Configuration window opens.
   Note: As a best practice, keep the Chart Configuration at its default settings.
5. For the x-axis, select one of the following options from the X-Axis Column dropdown list:
   - check_type
   - dimension
   - avg_observability_score
6. For the y-axis, select one or more of the following options from the Y-Axis Columns dropdown list:
   - check_type
   - dimension
   - avg_observability_score
7. For the type of chart you want to display, select one of the following options from the Chart Type dropdown list:
   - Line
   - Area
   - Column
   - Scatter
8. Click Save.
9. Navigate the pages of your report by clicking the Previous and Next pagination buttons, located at the bottom right of the columns.
Sample SQL query
You can use the following sample SQL query for an Observability Score Roll-Up report.
with
-- All dataset/column pairs from the schema catalog.
a as ( select dataset, col_nm from dataset_schema ),
-- Datasets scanned in the last 30 days, tagged with the row-count availability check.
b as ( select distinct dataset, 'Row Count', 'ROW_COUNT', 'AVAILABILITY'
       from dataset_scan
       where updt_ts >= (NOW() - INTERVAL '30 DAY') and updt_ts <= NOW() ),
-- Adaptive check types mapped to their DQ dimensions.
c as ( select * from
       (VALUES
         ('NULL', 'COMPLETENESS'),
         ('EMPTY', 'COMPLETENESS'),
         ('TYPE', 'SCHEMA'),
         ('CARDINALITY', 'DISTRIBUTION'),
         ('MIN', 'CONFORMITY'),
         ('MAX', 'CONFORMITY'),
         ('MEAN', 'CONFORMITY')) as t (type, dimension) ),
-- Cross join: every column paired with every check type.
d as ( select * from a, c ),
-- Add the dataset-level row-count check to the column-level checks.
e as ( select * from d UNION (select * from b) ),
-- Behavioral (adaptive) scores per dataset, column, check, and run.
f as ( select dataset as be_dataset, be_score, field, issue_type, run_id from behavior ),
-- Catalog metadata for each dataset.
g as ( select * from owl_catalog ),
-- Runs in the last 30 days, with their row counts.
h as ( select distinct dataset, run_id, rc
       from dataset_scan
       where updt_ts >= (NOW() - INTERVAL '30 DAY') and updt_ts < NOW() ),
-- Expand the checks to every run of each dataset.
i as ( select e.dataset, col_nm, type, dimension, run_id, rc
       from e, h where e.dataset = h.dataset ),
-- Column semantics from the schema catalog.
j as ( select distinct dataset, col_nm, col_semantic from dataset_schema ),
k as ( select * from business_unit_to_dataset ),
l as ( select * from business_units ),
-- Join scores to checks; a missing behavioral score counts as 100 (passing).
m as (
    select
        i.col_nm, type,
        case when be_score is null then 100 else be_score end as score,
        i.dimension, i.run_id, rc, be_score, issue_type, i.dataset,
        g.source_name, g.source, g.table_nm, g.run_mode, g.data_concept_id,
        j.col_semantic, l.name
    from i
    LEFT JOIN f on f.be_dataset = i.dataset and f.field = i.col_nm and f.issue_type = i.type and f.run_id = i.run_id
    INNER JOIN g on i.dataset = g.dataset
    LEFT JOIN j on i.dataset = j.dataset and i.col_nm = j.col_nm
    LEFT JOIN k on i.dataset = k.dataset
    LEFT JOIN l on l.id = k.id
)
-- Roll up: average observability score per check type and dimension.
select avg(score) as avg_observability_score, type as check_type, dimension
from m
group by type, dimension
order by dimension
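The query's final SELECT performs the actual roll-up: it averages the per-run, per-column scores grouped by check type and dimension. That step can be sketched in isolation with SQLite and mock rows; the table m and its columns mirror the query's final CTE, but the scores below are invented for illustration only:

```python
import sqlite3

# In-memory sketch of the roll-up step. The table "m" and its columns
# mirror the final CTE of the sample query; the rows are mock data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (col_nm TEXT, type TEXT, score REAL, dimension TEXT)")
conn.executemany(
    "INSERT INTO m VALUES (?, ?, ?, ?)",
    [
        ("order_id", "NULL",  100.0, "COMPLETENESS"),
        ("amount",   "NULL",   60.0, "COMPLETENESS"),
        ("order_id", "EMPTY",  80.0, "COMPLETENESS"),
        ("amount",   "TYPE",  100.0, "SCHEMA"),
    ],
)

# Average score per check type and dimension, as in the sample query.
rows = conn.execute(
    """
    SELECT AVG(score) AS avg_observability_score, type AS check_type, dimension
    FROM m
    GROUP BY type, dimension
    ORDER BY dimension
    """
).fetchall()

for avg_score, check_type, dimension in rows:
    print(f"{check_type:6s} {dimension:13s} {avg_score:.1f}")
```

Because the sample query coalesces missing behavioral scores to 100, columns with no detected issues pull the averages up, so a roll-up value below 100 indicates at least one breaking check in that group.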